
    Using Links to prototype a Database Wiki

    Both relational databases and wikis have strengths that make them attractive for use in collaborative applications. In the last decade, database-backed Web applications have been used extensively to develop valuable shared biological references called curated databases. Databases offer many advantages such as scalability, query optimization and concurrency control, but are not easy to use and lack other features needed for collaboration. Wikis have become very popular for early-stage biocuration projects because they are easy to use, encourage sharing and collaboration, and provide built-in support for archiving, history-tracking and annotation. However, curation projects often outgrow the limited capabilities of wikis for structuring and efficiently querying data at scale, necessitating a painful phase transition to a database-backed Web application. We perceive a need for a new class of general-purpose system, which we call a Database Wiki, that combines flexible wiki-like support for collaboration with robust database-like capabilities for structuring and querying data. This paper presents DBWiki, a design prototype for such a system written in the Web programming language Links. We present the architecture, typical use, and wiki markup language design for DBWiki and discuss features of Links that provided unique advantages for rapid Web/database application prototyping.

    Accumulating bindings

    We give a Haskell implementation of Filinski’s normalisation by evaluation algorithm for the computational lambda-calculus with sums. Taking advantage of extensions to the GHC compiler, our implementation represents object language types as Haskell types and ensures that type errors are detected statically. Following Filinski, the implementation is parameterised over a residualising monad. The standard residualising monad for sums is a continuation monad. Defunctionalising the uses of the continuation monad, we present the binding tree monad as an alternative.
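
    As a rough indication of the technique the paper builds on, here is a minimal normalisation by evaluation sketch in Haskell for the plain lambda calculus. It omits sums and the parameterisation over a residualising monad that the paper describes, and all of the names (Tm, Val, quote, normalise) are illustrative rather than taken from the paper.

```haskell
-- Object-language terms with de Bruijn indices.
data Tm = Var Int | Lam Tm | App Tm Tm
  deriving Show

-- Semantic values: object-level functions are represented by host functions.
data Val = VLam (Val -> Val) | VNeu Neu
data Neu = NVar Int | NApp Neu Val

-- Evaluation of terms into semantic values.
eval :: [Val] -> Tm -> Val
eval env (Var i)   = env !! i
eval env (Lam b)   = VLam (\v -> eval (v : env) b)
eval env (App f a) = apply (eval env f) (eval env a)

apply :: Val -> Val -> Val
apply (VLam f) v = f v
apply (VNeu n) v = VNeu (NApp n v)

-- Reification: read semantic values back into beta-normal terms.
-- The Int argument counts how many binders we are under.
quote :: Int -> Val -> Tm
quote lvl (VLam f) = Lam (quote (lvl + 1) (f (VNeu (NVar lvl))))
quote lvl (VNeu n) = quoteNeu lvl n

quoteNeu :: Int -> Neu -> Tm
quoteNeu lvl (NVar k)   = Var (lvl - k - 1)   -- de Bruijn level to index
quoteNeu lvl (NApp n v) = App (quoteNeu lvl n) (quote lvl v)

-- Normalisation of closed terms: evaluate, then read back.
normalise :: Tm -> Tm
normalise = quote 0 . eval []
```

    For example, normalise (App (Lam (Var 0)) (Lam (Var 0))) returns Lam (Var 0). Extending a sketch like this to sums is exactly where a residualising monad, such as the continuation monad or the paper's binding tree monad, becomes necessary.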

    Encapsulating effects


    Effect handlers via generalised continuations

    Plotkin and Pretnar's effect handlers offer a versatile abstraction for modular programming with user-defined effects. This paper focuses on foundations for implementing effect handlers, for the three different kinds of effect handlers that have been proposed in the literature: deep, shallow, and parameterised. Traditional deep handlers are defined by folds over computation trees, and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are defined by case splits (rather than folds) over computation trees. Parameterised handlers are deep handlers extended with a state value that is threaded through the folds over computation trees. We formulate the extensions both directly and via encodings in terms of deep handlers, and illustrate how the direct implementations avoid the generation of unnecessary closures. We give two distinct foundational implementations of all the kinds of handlers we consider: a continuation passing style (CPS) transformation and a CEK-style abstract machine. In both cases, the key ingredient is a generalisation of the notion of continuation to accommodate stacks of effect handlers. We obtain our CPS translation through a series of refinements as follows. We begin with a first-order CPS translation into untyped lambda calculus which manages a stack of continuations and handlers as a curried sequence of arguments. We then refine the initial CPS translation by uncurrying it to yield a properly tail-recursive translation, and then moving towards more and more intensional representations of continuations in order to support different kinds of effect handlers. Finally, we make the translation higher-order in order to contract administrative redexes at translation time. Our abstract machine design then uses the same generalised continuation representation as the CPS translation. We have implemented both the abstract machine and the CPS transformation (plus extensions) as backends for the Links web programming language.
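
    As a rough illustration of the "folds over computation trees" view of handlers that the abstract refers to (not the paper's CPS translation, abstract machine, or Links implementation), here is a minimal Haskell sketch with a single hypothetical operation; the names Comp, Op, runConst and runCounter are invented for the example.

```haskell
-- Computation trees: a pure value, or an operation together with the
-- continuation awaiting that operation's result.
data Comp a = Return a | Do Op (Int -> Comp a)

data Op = Get   -- one hypothetical operation, which resumes with an Int

-- Deep handler: a fold over the computation tree, interpreting Return and
-- every Do, including those produced by the continuation.
runConst :: Int -> Comp a -> a
runConst n (Return x) = x
runConst n (Do Get k) = runConst n (k n)

-- Parameterised handler: a deep handler with a state value threaded through
-- the fold (here the state is incremented on each Get).
runCounter :: Int -> Comp a -> a
runCounter s (Return x) = x
runCounter s (Do Get k) = runCounter (s + 1) (k s)

example :: Comp Int
example = Do Get (\x -> Do Get (\y -> Return (x + y)))

-- runConst 1 example == 2;  runCounter 0 example == 1
```

    A shallow handler would instead pattern-match only on the outermost Do and hand the remaining computation tree back to its caller rather than folding over it.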

    Normalisation by Evaluation in the Compilation of Typed Functional Programming Languages

    This thesis presents a critical analysis of normalisation by evaluation as a technique for speeding up compilation of typed functional programming languages. Our investigation focuses on the SML.NET compiler and its typed intermediate language MIL. We implement and measure the performance of normalisation by evaluation for MIL across a range of benchmarks. Taking a different approach, we also implement and measure the performance of a graph-based shrinking reductions algorithm for SML.NET. MIL is based on Moggi’s computational metalanguage. As a stepping stone to normalisation by evaluation, we investigate strong normalisation of the computational metalanguage by introducing an extension of Girard-Tait reducibility. Inspired by previous work on local state and parametric polymorphism, we define reducibility for continuations and more generally reducibility for frame stacks. First we prove strong normalisation for the computational metalanguage. Then we extend that proof to include features of MIL such as sums and exceptions. Taking an incremental approach, we construct a collection of increasingly sophisticated normalisation by evaluation algorithms, culminating in a range of normalisation algorithms for MIL. Congruence rules and alpha-rules are captured by a compositional parameterised semantics. Defunctionalisation is used to eliminate eta-rules. Normalisation by evaluation for the computational metalanguage is introduced using a monadic semantics. Variants in which the monadic effects are made explicit, using either state or control operators, are also considered. Previous implementations of normalisation by evaluation with sums have relied on continuation-passing-style or control operators. We present a new algorithm which instead uses a single reference cell and a zipper structure. This suggests a possible alternative way of implementing Filinski’s monadic reflection operations. In order to obtain benchmark results without having to take into account all of the features of MIL, we implement two different techniques for eliding language constructs. The first is not semantics-preserving, but is effective for assessing the efficiency of normalisation by evaluation algorithms. The second is semantics-preserving, but less flexible. In common with many intermediate languages, but unlike the computational metalanguage, MIL requires all non-atomic values to be named. We use either control operators or state to ensure each non-atomic value is named. We assess our normalisation by evaluation algorithms by comparing them with a spectrum of progressively more optimised, rewriting-based normalisation algorithms. The SML.NET front-end is used to generate MIL code from ML programs, including the SML.NET compiler itself. Each algorithm is then applied to the generated MIL code. Normalisation by evaluation always performs faster than the most naïve algorithms, often by orders of magnitude. Some of the algorithms are slightly faster than normalisation by evaluation. Closer inspection reveals that these algorithms are in fact defunctionalised versions of normalisation by evaluation algorithms. Our normalisation by evaluation algorithms perform unrestricted inlining of functions. Unrestricted inlining can lead to a super-exponential blow-up in the size of target code with respect to the source. Furthermore, the worst-case complexity of compilation with unrestricted inlining is non-elementary in the size of the source code.
    SML.NET alleviates both problems by using a restricted form of normalisation based on Appel and Jim’s shrinking reductions. The original algorithm is quadratic in the worst case. Using a graph-based representation for terms, we implement a compositional linear algorithm. This speeds up the time taken to perform shrinking reductions by up to a factor of fourteen, which leads to an improvement of up to forty percent in total compile time.
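
    To give a flavour of shrinking reductions, here is a naive Haskell sketch on a hypothetical let-language: dead bindings are dropped and bindings used exactly once are inlined. It is deliberately the simple formulation rather than the thesis's graph-based linear algorithm, the language and names are invented for the example, and a real compiler would additionally restrict inlining to pure bindings.

```haskell
-- A tiny pure expression language with let-bindings.
data Expr = Var String | Lit Int | Add Expr Expr | Let String Expr Expr
  deriving Show

-- Count occurrences of a variable, respecting shadowing.
uses :: String -> Expr -> Int
uses x (Var y)     = if x == y then 1 else 0
uses _ (Lit _)     = 0
uses x (Add a b)   = uses x a + uses x b
uses x (Let y e b) = uses x e + (if x == y then 0 else uses x b)

-- Naive substitution; a real implementation would also avoid variable capture.
subst :: String -> Expr -> Expr -> Expr
subst x v (Var y)   = if x == y then v else Var y
subst _ _ (Lit n)   = Lit n
subst x v (Add a b) = Add (subst x v a) (subst x v b)
subst x v (Let y e b)
  | x == y          = Let y (subst x v e) b
  | otherwise       = Let y (subst x v e) (subst x v b)

-- Shrinking reductions: drop dead lets, inline lets used exactly once.
shrink :: Expr -> Expr
shrink (Let x e b)
  | uses x b == 0 = shrink b                   -- dead-code elimination
  | uses x b == 1 = shrink (subst x e b)       -- linear inlining
  | otherwise     = Let x (shrink e) (shrink b)
shrink (Add a b)  = Add (shrink a) (shrink b)
shrink e          = e
```

    For example, shrink (Let "x" (Lit 1) (Add (Var "x") (Lit 2))) yields Add (Lit 1) (Lit 2). The repeated occurrence counting and re-traversal is what makes a formulation like this quadratic in the worst case, which is the cost the graph-based linear algorithm is designed to avoid.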

    Practical Normalization by Evaluation for EDSLs

    Embedded domain-specific languages (eDSLs) are typically implemented in a rich host language, such as Haskell, using a combination of deep and shallow embedding techniques. While such a combination enables programmers to exploit the execution mechanism of Haskell to build and specialize eDSL programs, it blurs the distinction between the host language and the eDSL. As a consequence, extension with features such as sums and effects requires a significant amount of ingenuity from the eDSL designer. In this paper, we demonstrate that Normalization by Evaluation (NbE) provides a principled framework for building, extending, and customizing eDSLs. We present a comprehensive treatment of NbE for deeply embedded eDSLs in Haskell that involves a rich set of features such as sums, arrays, exceptions and state, while addressing practical concerns about normalization such as code expansion and the addition of domain-specific features.
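
    As a small illustration of the eval/reify structure of NbE for a deeply embedded DSL, the following Haskell sketch uses a GADT with higher-order abstract syntax. It covers only integers and functions, leaving out the sums, arrays, exceptions, state and code-expansion concerns the paper addresses, and the types and names are illustrative rather than the paper's API.

```haskell
{-# LANGUAGE GADTs #-}

-- Deeply embedded object language, indexed by its host-language type.
data Exp a where
  Lit :: Int -> Exp Int
  Add :: Exp Int -> Exp Int -> Exp Int
  Lam :: (Exp a -> Exp b) -> Exp (a -> b)   -- higher-order abstract syntax
  App :: Exp (a -> b) -> Exp a -> Exp b

-- Semantic values: base values are host values, functions are host functions.
data Val a where
  VInt :: Int -> Val Int
  VFun :: (Val a -> Val b) -> Val (a -> b)

-- Evaluation: interpret object terms as semantic values.
eval :: Exp a -> Val a
eval (Lit n)   = VInt n
eval (Add a b) = case (eval a, eval b) of (VInt x, VInt y) -> VInt (x + y)
eval (Lam f)   = VFun (\v -> eval (f (reify v)))
eval (App f a) = case eval f of VFun g -> g (eval a)

-- Reification: read semantic values back into (normalised) object terms.
reify :: Val a -> Exp a
reify (VInt n) = Lit n
reify (VFun g) = Lam (reify . g . eval)

-- Normalisation of closed eDSL terms.
normalise :: Exp a -> Exp a
normalise = reify . eval
```

    For instance, normalise (App (Lam (\x -> Add x (Lit 1))) (Lit 2)) produces Lit 3; the practical questions the paper addresses begin once sums and effects are added, where reification can no longer proceed purely structurally.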

    Implementing deterministic declarative concurrency using sieves
